How Many Iterations in the Gibbs Sampler?

Authors

  • Adrian E. Raftery
  • Steven Lewis
Abstract

When the Gibbs sampler is used to estimate posterior distributions (Gelfand and Smith, 1990), the question of how many iterations are required is central to its implementation. When interest focuses on quantiles of functionals of the posterior distribution, we describe an easily implemented method for determining the total number of iterations required, and also the number of initial iterations that should be discarded to allow for "burn-in". The method uses only the Gibbs iterates themselves, and does not, for example, require external specification of characteristics of the posterior density. Here the method is described for the situation where one long run is generated, but it can also be easily applied if there are several runs from different starting points. It also applies more generally to Markov chain Monte Carlo schemes other than the Gibbs sampler. It can also be used when several quantiles are to be estimated, when the quantities of interest are probabilities rather than full posterior distributions, and when the draws from the posterior distribution are required to be approximately independent. The method is applied to several different posterior distributions. These include a multivariate normal posterior distribution with independent parameters, a bimodal distribution, a "cigar-shaped" multivariate normal distribution in ten dimensions, and a highly complex 190-dimensional posterior distribution arising in spatial statistics. In each case the method appears to give satisfactory results. The results suggest that reasonable accuracy may often be achieved with 5,000 iterations or less; this can frequently be reduced to less than 1,000 if the posterior tails are known to be light. However, there are frequent "exceptions" when the required number of iterations is much higher. One important such exception is when there are high posterior correlations between the parameters; even crude correlation-removing reparameterizations can greatly increase efficiency in such cases. Another important exception arises in hierarchical models when the Gibbs sampler tends to get "stuck"; …

The authors are grateful to Jeremy York for providing the data for Examples 4 and 5, for helping with the analysis and for useful discussions and suggestions, and to Julian Besag and an anonymous referee for helpful comments. A Fortran program called "Gibbsit" that implements the methods described here may be obtained from StatLib by sending an e-mail message to [email protected] containing the single line "send gibbsit from general". While the program is not maintained, questions about it may be addressed by e-mail to Adrian Raftery at [email protected].
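The calculation behind the method is compact enough to sketch. Below is a minimal Python illustration of the two-state Markov chain construction on which it rests, written from the published Raftery-Lewis formulas; the function name, the default tolerances (q = 0.025, r = 0.005, s = 0.95, eps = 0.001), and the omission of Gibbsit's thinning-parameter selection step are our simplifications, not part of the original program.

```python
# Sketch of the Raftery-Lewis iteration-count diagnostic for a single long run
# of a scalar functional of the Gibbs output (a 1-D array of draws).
import numpy as np
from scipy.stats import norm

def raftery_lewis(theta, q=0.025, r=0.005, s=0.95, eps=0.001):
    """Burn-in M and post-burn-in sample size N needed to estimate the
    q-th posterior quantile to within +/- r with probability s."""
    theta = np.asarray(theta, dtype=float)

    # Binarize the chain around the empirical q-th quantile: the method
    # uses only the Gibbs iterates themselves, as the abstract emphasizes.
    z = (theta <= np.quantile(theta, q)).astype(int)
    z0, z1 = z[:-1], z[1:]

    # Fit a two-state Markov chain to the binary sequence (assumes both
    # transitions are observed, i.e. 0 < alpha + beta < 1).
    alpha = np.sum((z0 == 0) & (z1 == 1)) / np.sum(z0 == 0)  # P(0 -> 1)
    beta = np.sum((z0 == 1) & (z1 == 0)) / np.sum(z0 == 1)   # P(1 -> 0)

    # Burn-in: iterations until the marginal law of the binary chain is
    # within eps of its stationary distribution.
    lam = 1.0 - alpha - beta
    M = int(np.ceil(np.log(eps * (alpha + beta) / max(alpha, beta))
                    / np.log(abs(lam))))

    # Required post-burn-in sample size, from the asymptotic variance of
    # the two-state chain; N_min is the i.i.d. lower bound for comparison.
    phi = norm.ppf(0.5 * (1.0 + s))
    N = int(np.ceil(alpha * beta * (2.0 - alpha - beta)
                    / (alpha + beta) ** 3 * (phi / r) ** 2))
    N_min = int(np.ceil(q * (1.0 - q) * (phi / r) ** 2))
    return M, N, N_min
```

For a quickly mixing chain, N stays close to the i.i.d. bound N_min (roughly 3,750 draws at these default tolerances), while strong autocorrelation, as in the high-correlation exceptions the abstract mentions, inflates N far beyond it.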


Similar articles

Three Short Papers on Sampling-based Inference: 1. How Many Iterations in the Gibbs Sampler? 2. Model Determination 3. Spatial Statistics

This technical report consists of three short papers on Monte Carlo Markov chain inference. The first paper, "How many iterations in the Gibbs sampler?," proposes an easily implemented method for determining the total number of iterations required to estimate probabilities and quantiles of the posterior distribution, and also the number of initial iterations that should be discarded to allow fo...


Rates of Convergence for Gibbs Sampling for Variance Component Models

This paper analyzes the Gibbs sampler applied to a standard variance component model, and considers the question of how many iterations are required for convergence. It is proved that for K location parameters, with J observations each, the number of iterations required for convergence (for large K and J) is a constant times (1 + log K / log J). This is one of the first rigorous, a priori resul...
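For illustration (our numbers, not the paper's): with K = 100 location parameters and J = 50 observations each, the bound is a constant times 1 + log 100 / log 50 ≈ 2.18, so the required number of iterations grows very slowly with the size of the model.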



On convergence of the EM algorithm and the Gibbs sampler

In this article we investigate the relationship between the two popular algorithms, the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM-type algorithm. This helps in implementing either of the algorithms, as improvement strategies for one algorithm can be directly ...


Enhanced sampling schemes for MCMC based blind Bernoulli-Gaussian deconvolution

This paper proposes and compares two new sampling schemes for sparse deconvolution using a Bernoulli-Gaussian model. To tackle such a deconvolution problem in a blind and unsupervised context, the Markov Chain Monte Carlo (MCMC) framework is usually adopted, and the chosen sampling scheme is most often the Gibbs sampler. However, such a sampling scheme fails to explore the state space efficient...



Publication date: 1992